Search Results: "nicolas"

19 February 2015

Nicolas Dandrimont: We need your help to make GSoC and Outreachy in Debian a success this summer!

Hi everyone, A quick announcement: Debian has applied to the Google Summer of Code, and will also participate in Outreachy (formerly known as the Outreach Program for Women) for the Summer 2015 round! Those two mentoring programs are a great way for our project to bootstrap new ideas, give a new impulse to some old ones, and of course to welcome an outstanding team of motivated, curious, lively new people among us. We need projects and mentors to sign up really soon (before February 27th, that's next week), as our project list is what Google uses to evaluate our application to GSoC. Project proposals should be described on our wiki page. We have three sections:
  1. Coding projects with confirmed mentors are proposed to both GSoC and Outreachy applicants
  2. Non-Coding projects with confirmed mentors are proposed only to Outreachy applicants
  3. Project ideas without confirmed mentors will only happen if a mentor appears. They are kept on the wiki page until the application period starts, as we don't want to give applicants false hopes of being picked for a project that won't happen.
Once you're done, or if you have any questions, drop us a line on our mailing-list (soc-coordination@lists.alioth.debian.org), or on #debian-soc on OFTC. We also would LOVE to be able to welcome more Outreachy interns. So far, and thanks to our DPL, Debian has committed to fund one internship (US$6500). If we want more Outreachy interns, we need your help :). If you, or your company, have some money to put towards an internship, please drop us a line at opw@debian.org and we'll be in touch. Some of the successes of our Outreachy alumni include the localization of the Debian Installer to a new locale, improvements in the sources.debian.net service, documentation of the debbugs codebase, and a better integration of AppArmor profiles in Debian. Thanks a lot for your help!

11 September 2014

Sylvestre Ledru: Rebuild of Debian using Clang 3.5.0

Clang 3.5.0 has just been released. A new rebuild has been done to highlight the progress toward getting Debian built with clang. tl;dr: Great progress. We decreased from 9.5% to 5.7% of failures. Full results are available on http://clang.debian.net. At the time of the rebuild with 3.4.2, we had 2040 packages failing to build with clang. With 3.5.0, this dropped to 1261 packages.

Fixes: With Arthur Marble and Alexander Ovchinnikov, both GSoC students, we worked on various ways to decrease the number of errors.

Upstream fixes: First, and most obviously, we fixed programming bugs/mistakes in upstream sources. Basically, we took categories of failures and fixed issues one after the other. We started with simple bugs like 'Wrong main declaration', 'non-void function should return a value' or 'void function should not return a value'.

They are trivial to fix. We continued with harder fixes like 'Undefined reference' or 'Variable length array for a non-POD (plain old data) element'.
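To illustrate the categorization workflow described above, here is a toy sketch in Python; the log lines and regular expressions are invented for the example and are not the actual categorization used on clang.debian.net:

```python
import re
from collections import Counter

# Illustrative build-log lines, in the spirit of the categories named above.
log_lines = [
    "error: 'main' must return 'int'",
    "error: non-void function 'foo' should return a value",
    "error: void function 'bar' should not return a value",
    "error: non-void function 'baz' should return a value",
]

# One regex per failure category; the real classification is more elaborate.
categories = {
    "wrong main declaration": re.compile(r"'main' must return"),
    "non-void should return": re.compile(r"non-void function .* should return"),
    "void should not return": re.compile(r"void function .* should not return"),
}

# Count how many log lines fall into each category.
counts = Counter()
for line in log_lines:
    for name, pattern in categories.items():
        if pattern.search(line):
            counts[name] += 1

print(counts["non-void should return"])  # → 2
```

Grouping failures this way lets one fix a whole class of bugs at once instead of triaging packages individually.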

So, besides these ones, we worked on:
In total, we reported 295 bugs with patches. 85 of them have been fixed (meaning that the Debian maintainer uploaded a new version with the fix).

In parallel, I think that the switch by FreeBSD and Mac OS X to Clang also helped upstreams fix various issues. Hacking in clang: As a parallel approach, we started to implement a suggestion from Linus Torvalds and a few others. Instead of trying to fix everything upstream, we tried, where we could, to update clang to improve its gcc compatibility.

gcc has many flags to disable or enable optimizations. Some of them are legacy, others make no sense in clang, etc. Instead of failing in clang with an error, we created a new category of warnings (showing "optimization flag '%0' is not supported") and moved all relevant flags into it. Some examples: r212805, r213365, r214906 or r214907.

We also updated clang to silently accept some unneeded arguments like -finput_charset=UTF-8 (r212110), clang being UTF-8 compliant anyway.

Finally, we worked on the forwarding of linker flags. Clang and gcc have very different behaviors: when gcc does not know an argument, it forwards the argument to the linker. Clang, in this case, rejects the argument and fails with an error. In clang, we have to explicitly declare which arguments are going to be transferred to the linker. Of course, the correct way to pass arguments to the linker is to use -Xlinker or -Wl, but the Debian rebuild proved that these shortcuts are used. Two of these arguments are now forwarded:

New errors: Just like in other releases, new warnings have been added in clang. With (bad) usage of -Werror by upstream software, this causes new build failures. I also took the opportunity to add some further categorizations in the list of errors. Some examples:

Next steps: With the Debile project being close to ready thanks to Clément Schreiner's GSoC, we will now have an automatic and transparent way to rebuild packages using clang.

Conclusion: As stated, we can see a huge drop in the number of failures over time:
Hopefully, with Clang getting better and better, and more and more projects adopting it as their default compiler or as a base for plugin/extension development, this percentage will continue to decrease.
Having some kind of release goal with clang for Jessie+1 can now be considered potentially reachable. Want to help? There are several things that can be done to help. Acknowledgments: Thanks to David Suarez for the rebuilds of the archive, Arthur Marble and Alexander Ovchinnikov for their GSoC work, and Nicolas Sévelin-Radiguet for the few fixes.

8 September 2014

Jaldhar Vyas: Debconf 14 - Days 1 and 2

Unfortunately I was not able to attend DebConf this year, but thanks to the awesome video team, all the talks are available for your viewing pleasure. In order to recreate an authentic Portland experience, I took my laptop into the shower along with a vegan donut and had my children stand outside yelling excerpts from salon.com in whiny Canadianesque accents. Here are some notes I took as I watched the talks.
  • Welcome Talk
  • Debian in the Dark Ages of Free Software - Stefano Zacchiroli
  • Weapons of the Geek - Gabriella Coleman
  • bugs.debian.org -- Database Ho! - Don Armstrong
  • Grub Ancient and Modern - Colin Watson
  • One year of fedmsg in Debian - Nicolas Dandrimont
  • Coming of Age: My Life with Debian - Christine Spang
  • Status report of the Debian Printing Team - Didier Raboud

31 August 2014

Alexander Wirt: cgit on alioth.debian.org

Recently I was doing some work on the alioth infrastructure, like fixing or cleaning up things. One of the more visible things I did was the switch from gitweb to cgit. cgit is a lot faster and looks better than gitweb. The list of repositories is generated every hour. The move also has the nice effect that user repositories are available via the cgit index again. I don't plan to disable the old gitweb, but I created a bunch of redirect rules that - hopefully - redirect most use cases of gitweb to the equivalent cgit URL. If I broke something, please tell me; if I missed a common use case, please tell me. You can usually reach me on #alioth@oftc or via mail (formorer@d.o). People also asked me to upload my cgit package to Debian; the package is now waiting in NEW. Thanks to Nicolas Dandrimont (olasd) we also have a patch included that generates proper HTTP return codes if a repo doesn't exist.
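To give an idea of what such redirects do, here is a small Python sketch of the kind of gitweb-to-cgit URL mapping involved; the patterns below are purely illustrative, not the actual rewrite rules deployed on alioth:

```python
import re

# Hypothetical gitweb -> cgit URL translations (illustrative only).
rules = [
    # gitweb summary page -> cgit repository index
    (re.compile(r"/\?p=([^;]+);a=summary$"), r"/\1/"),
    # gitweb shortlog -> cgit log view
    (re.compile(r"/\?p=([^;]+);a=shortlog$"), r"/\1/log/"),
]

def redirect(path):
    """Return the cgit equivalent of a gitweb path, or the path unchanged."""
    for pattern, replacement in rules:
        if pattern.search(path):
            return pattern.sub(replacement, path)
    return path  # no rule matched; serve as-is

print(redirect("/?p=collab-maint/foo.git;a=summary"))
```

In practice the same mapping would be expressed as web-server rewrite rules rather than application code, but the principle is the same: pattern-match the old URL scheme and rewrite it into the new one.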

9 July 2014

Mike Gabriel: Cooperation between X2Go and TheQVD

I recently got in contact with Nicolas Arenas Alonso and Nito Martinez from the Quindel group (located in Spain) [1]. Those guys develop a software product called TheQVD (The Quality Virtual Desktop) [2]. The project does similar things to what X2Go does. In fact, they use NX 3.5 from NoMachine internally, as we do in X2Go. Already a year ago, I noticed their activity on TheQVD and thought... "Ahaaa!?!". Now, a couple of weeks back, we received a patch for libxcomp3 that fixes an FTBFS (failure to build from source) of nx-libs-lite against Android [3].

7 May 2014

Julien Danjou: Making of The Hacker's Guide to Python

As promised, today I would like to write a bit about the making of The Hacker's Guide to Python. It has been a very interesting experiment, and I think it is worth sharing with you. The inspiration: It all started out at the beginning of August 2013. I was spending my summer, as the rest of the year, hacking on OpenStack. As years passed, I got more and more deeply involved in the various tools that we either built or contributed to within the OpenStack community. And I somehow got the feeling that my experience with Python, the way we used it inside OpenStack and other applications during these last years, was worth sharing. Worth writing something bigger than a few blog posts. The OpenStack project does code reviews, and therefore so did I for almost two years. That inspired a lot of topics, like the definitive guide to method decorators that I wrote at the time I started the hacker's guide. Stumbling upon the same mistakes or misunderstandings over and over is, somehow, inspiring. I also stumbled upon Nathan Barry's blog and his book Authority, which were very helpful to get started and served as some sort of guideline. All of that brought me enough ideas to start writing a book about Python software development for people already familiar with the language. The writing: The first thing I did was to list all the topics I wanted to write about. The list turned out to include subjects that had no direct interest for a practical guide. For example, on one hand, very few developers know in detail how metaclasses work, but on the other hand, I never had to write a metaclass during these last years. That's the kind of subject I decided not to write about; I dropped all subjects that I felt were not going to help my readers be more productive, even if they could be technically interesting.
Then, I gathered all the problems I had seen during the code reviews I did over these last two years. Some of them I only recalled in the days following the beginning of the project, but I kept adding them to the table of contents, reorganizing things as needed. After a couple of weeks, I had a pretty good overview of the content I would write about. All I had to do was fill in the blanks (that sounds so simple now). The entire writing of the book took a hundred hours spread from August to November, during my spare time. I had to stop all my other side projects for that. The interviews: While writing the book, I tried to parallelize everything I could. That included asking people for interviews to be included in the book. I already had a pretty good list of the people I wanted to feature in the book, so I took some time as soon as possible to ask them, and sent them detailed questions. I discovered two categories of interviewees: some of them were very fast to answer (within a week), and others were much, much slower. A couple of them even set up Git repositories to answer the questions, because that probably looked like an entire project to them. :-) So I had to not lose sight of them, kindly ask from time to time if everything was alright, and at some point start to kindly set some deadlines. In the end, the quality of the answers was awesome, and I like to think that was because I picked the right people! The proof-reading: Once the book was finished, I needed to have people proof-read it. This was probably the hardest part of this experiment. I needed two different types of reviews: technical reviews, to check that the content was correct and interesting, and a language review, which was even more important since English is not my native language. Finding technical reviewers seemed easy at first, as I had a ton of contacts that I identified as being able to review the book.
I started by asking a few people if they would be comfortable reading a simple chapter and giving me feedback. I started doing that in September: having the writing and the reviews done in parallel was important to me in order to minimize latency and the book's release delay. All the people I contacted answered positively and said they would be interested in doing a technical review of a chapter. So I started to send chapters to them. But in the end, only 20% replied back, and even after that, a large portion stopped reviewing after a couple of chapters. Don't get me wrong: you can't be mad at people for not wanting to spend their spare time on book editing like you do. However, from the few people who gave their time to review a few chapters, I got tremendous feedback, at all levels. That was very important and helped a lot with staying confident: writing a book alone for months, without anyone looking over your shoulder, can make you doubt that you are creating something worthwhile. As for the English proof-reading, I went ahead and used oDesk to recruit a professional proof-reader. I looked for people with the right skills: a good English level (being a native English speaker at least), the ability to understand what the book was about, and the ability to work within reasonable delays. I had mixed results from the people I hired, but I guess that's normal. The only error I made was not parallelizing those reviews enough, so I probably lost a couple of months on that. The toolchain
While writing the book, I took a few breaks to build a toolchain. What I call a toolchain is the set of tools used to render the final PDF, EPUB and MOBI files of the guide. After some research, I decided to settle on AsciiDoc, using the DocBook output, which is then transformed to LaTeX and then to PDF, or to EPUB directly. I rely on Calibre to convert the EPUB file to MOBI. It took me a few hours to get what I wanted, using some magic LaTeX tricks to get a proper render, but it was worth it and I'm particularly happy with the result. For the cover design, I asked my talented friend Nicolas to do something for me, and he designed the wonderful cover and its little snake! The publishing: Publishing is an interesting topic people kept asking me about. This is what I had to answer a few dozen times: I never had any plan to ask an editor to publish this book. Nowadays, asking an editor to publish a book feels to me like asking a major company to publish a CD. It feels awkward. However, don't get me wrong: there can be a few upsides to having an editor. They will find reviewers and review your book for you. Having the book review handled for you is probably a very good thing, considering how hard it was for me to get that in place. It can be especially important for a technical book. Also, your book may end up in brick-and-mortar stores and be part of a collection, both improving visibility. That may improve your book's sales, though the editor and all the intermediaries are going to keep the largest share of the money anyway. I've heard good stories about people using Gumroad to sell electronic content, so after looking at the competitors in that market, I picked them. I also had the idea to sell the book for Bitcoins, so I settled on Coinbase, because they have a nice API for that. Setting up everything was quite straightforward, especially with Gumroad. It only took me a few hours to do so. Writing the Coinbase application took a few hours too.
My initial plan was to only sell an electronic version online. On the other hand, since I kept hearing that a printed version should exist, I decided to give it a try. I chose to work with Lulu because I knew people using it, and it was pretty simple to set up. The launch: Once I had everything ready, I built the selling page and connected everything between Mailchimp, Gumroad, Coinbase, Google Analytics, etc. Writing the launch email was really exciting. I used a Mailchimp feature to send the launch mail in several batches, just to have some margin in case of a sudden last-minute problem. But everything went fine. Hurrah! I distributed around 200 copies of the ebook in the first 48 hours, for about $5000. That covered all the costs I had from writing the book, and even more, so I was already pretty happy with the launch.
Retrospective: In retrospect, something that I didn't do the best way possible was building a solid mailing list of interested people, and building strong anticipation and an incentive to buy the book at launch date. My mailing list counted around 1500 people who subscribed because they were interested in the launch of the book; in the end, probably only 10-15% of them bought the book during the launch, which is probably a bit lower than what I could expect. But more than a month later, I have distributed in total almost 500 copies of the book (including physical units) for more than $10000, so I tend to think that this was a success. I still sell a few copies of the book each week, but the numbers are small compared to the launch. I sold fewer than 10 copies of the ebook for Bitcoins, and I admit I'm a bit disappointed and surprised by that. Physical copies represent 10% of the book's distribution. That's probably a lot lower than most of the people who pushed me to do it thought it would be, but it is still higher than what I thought it would be. So I would still advise having a paperback version of your book, at least because it's nice to have it in your library.
I have only received positive feedback, a few typo notices, and absolutely no refund requests, which I really find amazing. The good news is also that I've been contacted by a couple of Korean and Chinese editors about getting the book translated and published in those countries. If everything goes well, the book should be translated in the upcoming months and be available in those markets in 2015! If you didn't get a copy yet, there's still time to do so!

22 March 2014

Nicolas Dandrimont: Debian proposals in GSoC 2014

The GSoC student application period is over, and the last two days were pretty interesting. For a few years now, Olly Betts has provided us with a spreadsheet to graph the number of applicants to an organization over time. Here's the graph for Debian this year: Debian GSoC proposals, 2014 edition (Historical graphs: 2013, 2012. Spreadsheet available from Olly's blog.) On Wednesday, I was thinking "hmm, 30 applicants, this is a slow year". Well, the number of proposals more than doubled in the last two days, to conclude at a whopping 68 applications! The last one was submitted just three seconds before the deadline. If you want to take a look at the proposals, head over to the Debian wiki. Time to get reviewing! The final student acceptances will be published in just under a month, on April 21st.

1 September 2013

Ritesh Raj Sarraf: Laptop Mode Tools 1.64

I just released Laptop Mode Tools version 1.64, and am pleased to introduce the new graphical utility to toggle individual power saving modules in the package. The GUI is written using the PyQt toolkit, and the options in the GUI are generated at runtime, based on the list of available power saving modules. Apart from the GUI configuration tool, this release also includes some bug fixes:
  • Don't touch USB controller power settings. The individual devices, when plugged in while on battery, inherit the power settings from the USB controller
  • start-stop-programs: add support for systemd. Thanks to Alexander Mezin
  • Replace hardcoded path to udevadm with "which udevadm". Thanks to Alexander Mezin
  • Honor .conf files only. Thanks to Sven Köhler
  • Make '/usr/lib' path configurable. This is especially useful for systems that use /usr/lib64, or /lib64 directly. Thanks to Nicolas Braud-Santoni
  • Don't call killall with the -g argument. Thanks to Murray Campbell
  • Fix RPM Spec file build errors
The Debian package will follow soon. I don't intend to introduce a new package for the GUI tool because the source is hardly 200 lines, so the dependencies (PyQt packages) will go in as Recommends or Suggests.
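For the curious, the runtime-generation idea mentioned above can be sketched without Qt. This is a hypothetical illustration (the module names and directory layout are invented for the example, not Laptop Mode Tools' actual code); in the real GUI, each discovered module would become a PyQt checkbox created at runtime:

```python
import os
import tempfile

# Stand-in for a power-saving-modules directory (hypothetical layout).
moddir = tempfile.mkdtemp()
for name in ("intel-sata-powermgmt", "wireless-power", "usb-autosuspend"):
    open(os.path.join(moddir, name), "w").close()

# Build one toggle entry per discovered module; with PyQt, each entry
# would be turned into a QCheckBox instead of a dict item.
toggles = {name: False for name in sorted(os.listdir(moddir))}
print(len(toggles))
```

Because the option list is discovered rather than hardcoded, newly installed modules show up in the GUI without any code change.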

8 August 2013

Nicolas Dandrimont: Hello from DebCamp

DebConf flag (minus the wind). Small update, as someone was complaining about the lack of pictures from DebCamp on Planet. Not to worry, everything is going fine, and some of the most important s3kr3t stuff is ready. You can see a few pictures in the gallery. The view from the venue is quite outstanding (it was better when there was some sun on Tuesday, but my camera battery was out). On my TODO-list: See you there!

14 July 2013

Nicolas Dandrimont: Bootstrapping fedmsg for Debian

As you might (or might not) know, this summer I have taken on mentoring a GSoC project by Simon Chopin (a.k.a. laarmen) whose goal is to bring fedmsg, the Fedora Infrastructure message bus, to Debian. Most of the work I'll be talking about here is Simon's work; please send all the praise towards him (I can take the blame, though). What is this about? As the project proposal states, the idea is to provide Debian with a unified, real-time, and open mechanism of communication between its services. This communication bus would allow anyone, anywhere, to start consuming messages and reacting to events happening in Debian's infrastructure. When we told upstream about our plan of adapting fedmsg to work on Debian, they were thrilled, and they have been very supportive of the project. How is the project going? Are you excited? I know I'm excited. Yep, he's excited too. Well, the general idea was easy enough, but the task at hand is a challenge. First of all, fedmsg has a lot of (smallish) dependencies, most of them new to Debian. Thanks to Simon's work during the bonding period, and thanks to paultag's careful reviews, the first batch of packages (the first dependency level, comprising kitchen, bunch, m2ext, grapefruit, txws, txzmq and stomper) is currently sitting in the NEW queue. The four remaining packages (fabulous, moksha.common, moksha.hub and fedmsg proper) are mostly ready, waiting in the Debian Python Module Team SVN repository for a review and sponsorship. While we're waiting for the packages to trickle into Debian, Simon is not twiddling his thumbs. Work has taken place on a few fronts: fedmsging mentors.debian.net. Package backports: mentors.debian.net was chosen because I'm an admin and could do the integration quickly. That involved backporting the eleven aforementioned packages, plus zeromq3 and python-zmq (which only have TCP_KEEPALIVE in recent versions), to wheezy, as that's what the mentors.d.n host is running.
(Also, python-zmq needs a new-ish cython to build, so I had to backport that too.) Thankfully, those were no-changes backports that were easily scripted, using a pbuilder hook to allow the packages to depend on previously built packages. I have made a wheezy package repository available here. It's signed with my GnuPG key, ID 0xB8E5087766475AAF, which should be fairly well connected. Code changes: After Simon's initial setup of debexpo (which is not an easy task), the code changes have been fairly simple (yes, this is just a proof of concept). You can see them on top of the live branch in debexpo's sources. I finally had the time to make them live earlier this week, and mentors.debian.net has been sending messages on Debian's fedmsg bus ever since. Deployment: mentors.d.n sends its messages on five endpoints, tcp://mentors.debian.net:3000 through tcp://mentors.debian.net:3004. That is one endpoint per WSGI worker, plus one for the importer process(es). You can tap in directly by following the instructions below. debmessenger: Debmessenger is the stop-gap email-to-fedmsg bridge that Simon is developing. The goal is to create some activity on the bus without disrupting or modifying any infrastructure service. It's written in hy, and it leverages the existing Debian-related Python modules to do its work, using inotify to react when a mail gets dropped in a Maildir. Right now, it's supposed to understand changes mails (received from debian-devel-changes) and bug mails (from debian-bugs-dist). I'll work on deploying an instance of debmessenger this weekend, to create some more traffic on the bus. Reliability of the bus: I suggested using fedmsg as this was something that already existed, and that solved a problem identical to the one we wanted to tackle (open interconnection of a distribution's infrastructure services).
Reusing a piece of infrastructure that already works in another distro means that we can share tools, share ideas, and come up with solutions that we might not have considered when working alone. The drawback is that we have to either adapt to the tool's idiosyncrasies, or adapt the tool to our way of working. One of the main points raised by DSA when the idea of using fedmsg was brought up was reliability. Debian's infrastructure is spread across datacenters (and basements :D ) all over the world, and thus faces different challenges than Fedora's infrastructure, which is more tightly integrated. Therefore, we have to ensure that a critical consumer (say, a buildd) doesn't miss any message it needs for its operation (say, that a package got accepted). There has been work upstream to ensure that fedmsg doesn't lose messages, but we need to take extra steps to make sure that a given consumer can replay the messages it has missed, should the need arise. Simon has started a discussion on the upstream mailing list, and is working on a prototype replay mechanism. Obviously, we need to test scenarios of endpoints dropping off the grid, hence the work on getting some activity on the bus. How can I take a look? a.k.a. Another one rides the bus. A Parisian bus built in 1932 (Picture Yves-Laurent Allaert, CC-By-SA v2.5 / GFDL v1.2 license). So, the bus is pretty quiet right now, as only two kinds of events trigger messages: a new upload to mentors.debian.net, and a new comment on a package there. Don't expect a lot of traffic. However, generating some traffic is easy enough: just log in to mentors.d.n, pick a package of mine (not much choice there), or a real package you want to review, and leave a comment. Poof, a message appears. For the lazy: join #debian-fedmsg on OFTC, and look for messages from the debmsg bot. Current example output:
01:30:25 <debmsg> debexpo.voms-api-java.upload (unsigned) --
02:03:16 <debmsg> debexpo.ocamlbricks.comment (unsigned) --
(definitely needs some work, but it's a start) Listening in by yourself: You need to set up fedmsg. I have a repository of wheezy packages and one of sid packages, signed with my GnuPG key, ID 0xB8E5087766475AAF. You can add them to a file in /etc/apt/sources.list.d like this:
deb http://perso.crans.org/dandrimont/fedmsg-<sid wheezy>/ ./
Then, import my GnuPG key into apt (apt-key add), update your sources (apt-get update), and install fedmsg (apt-get install python-fedmsg). The versions are << anything real, so you should get the real thing as soon as it hits the archive. Finally, in /etc/fedmsg.d/endpoints.py, you can comment out the Fedora entries, and add a Debian entry like this:
    "debian": [
        "tcp://fedmsg.olasd.eu:9940",
    ],
fedmsg.olasd.eu runs a fedmsg gateway connected to the mentors.d.n endpoints, and thus forwards all the mentors messages. It'll be connected to debmessenger as soon as that's running too. To actually see messages, disable validate_signatures in /etc/fedmsg.d/ssl.py, setting it to False. The Debian messages aren't signed yet (it's on the roadmap), and we don't ship the Fedora certificates, so we can't authenticate their messages either. Finally, you can run fedmsg-tail --really-pretty in a terminal. As soon as there's some activity, you should get this kind of output (color omitted):
{
  "i": 1,
  "msg": {
    "version": "2.0.9-1.1",
    "uploader": "Emmanuel Bourg <ebourg@apache.org>"
  },
  "topic": "org.debian.dev.debexpo.voms-api-java.upload",
  "username": "expo",
  "timestamp": 1373758221.491809
}
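Since fedmsg payloads are plain JSON, consuming one needs nothing beyond the standard library. A minimal sketch using the sample message shown above (field names taken from that output):

```python
import json

# The sample fedmsg payload from the output above.
raw = '''
{
  "i": 1,
  "msg": {
    "version": "2.0.9-1.1",
    "uploader": "Emmanuel Bourg <ebourg@apache.org>"
  },
  "topic": "org.debian.dev.debexpo.voms-api-java.upload",
  "username": "expo",
  "timestamp": 1373758221.491809
}
'''

message = json.loads(raw)
# The topic is dot-separated: origin (org.debian.dev), service (debexpo),
# package, and finally the event type.
print(message["topic"].rsplit(".", 1)[-1])  # the event: "upload"
print(message["msg"]["version"])
```

A real consumer would receive the same structure from the bus instead of a hardcoded string, then dispatch on the topic.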
Enjoy real-time updates from your favorite piece of infrastructure! What's next? While Simon continues working on reliability and gets started on message signing according to his schedule, I'll take a look at deploying the debmessenger bridge, and at making the pretty-printer output useful for our topics. There will likely be some changes to the messages sent by debexpo, as we got some feedback from the upstream developers about making them work in the fedmsg tool ecosystem (datanommer and datagrepper come to mind). You can tune in to Simon's weekly reports on the soc-coordination list, and look at the discussions with upstream on the Fedora messaging-sig list. You can also catch us on IRC, in #debian-soc on OFTC. We're also hanging out on the upstream channel, #fedora-apps on freenode.

7 July 2013

Paul Tagliamonte: Hy 0.9.10 released

A huge release! The combined 0.9.9 and 0.9.10 releases (I made a mistake releasing) are now tagged and pushed to PyPI. The release features a number of enhancements and fixes, and is just an absolute thrill to play with. Thanks to the contributors this cycle:
Bob Tolbert, Christopher Allan Webber, Duncan McGreggor, Guillermo Vaya, Joe H. Rahme, Julien Danjou, Konrad Hinsen, Morten Linderud, Nicolas Dandrimont, Ralph Moritz, rogererens, Thomas Ballinger, Tuukka Turto
Outstanding! New features are now being considered for 0.9.11. Thanks!

19 May 2013

Nicolas Dandrimont: Hello world

Or rather, hello Planet! Here's a somewhat traditional introductory post. I'm Nicolas Dandrimont, I'm French, and I'm a sysadmin in a grande école, where I'm mostly in charge of the GNU/Linux workstations and servers. In Debian, I'm a DM, currently in the NM queue, so I might become a DD soon-ish. I am (rather inactively) co-maintaining a few packages. In my Debian "career", I have been involved in OCaml packaging and Python packaging, although lately most of my time has been spent on Google Summer of Code (mentor for two mentors.debian.net projects in 2012, org admin for Debian in 2013), and on mentors.debian.net. In other free-software-related projects, I own a RepRap 3D printer, and I grew some interest in the related software, e.g. Slic3r and printrun. There has been a lot of action in Fedora around packaging 3D-printing-related software, and it'd be great to get a team together to work on that in Debian during the jessie release cycle. Consider this a call for interested parties :) Unrelatedly, paultag has tricked me into working on hy, which is way too much fun. Blame him if you feel that I have been inactive lately; this has been eating way too much of my free time ;) Hopefully I'll be able to make regular updates on the work I do in Debian and free software, so stay tuned!

25 April 2013

Julien Danjou: OpenStack Design Summit Havana, from a Ceilometer point of view

Last week was the OpenStack Design Summit in Portland, OR, where we, developers, discussed and designed the upcoming OpenStack release (Havana). The summit has been wonderful. It was my first OpenStack design summit -- even more so as a PTL -- and bumping into various people I had never met so far and had worked with online only was a real pleasure! <figure> <figcaption>Me and Nick ready to talk about Ceilometer new features.</figcaption> </figure> Nick Barcet from eNovance, our dear previous Ceilometer PTL, and myself talked about Ceilometer and presented the work that has been done for Grizzly, with some previews of what we would like to see done for the Havana release. You can take a look at the slides if you're curious. Design sessions: Ceilometer had its design sessions during the last days of the summit. We noted a lot of things and commented during the sessions in our Etherpad instances. The first session was a description of Ceilometer's core architecture for interested people, and was a wonderful success considering that the room was packed. Our Doug Hellmann did a wonderful job introducing people to Ceilometer and answering questions. <figure> <figcaption>Doug explaining Ceilometer architecture.</figcaption> </figure> The next session was about getting feedback from our users. We were quite surprised to discover wonderful real use-cases and deployments, like CERN using Ceilometer and generating 2 GB of data per day! The following sessions ran on Thursday and were much more about new features. A lot of already existing blueprints were discussed and quickly validated during the first morning session. Then, Sandy Walsh introduced the architecture they use inside StackTach, so we can start thinking about getting things from it into Ceilometer. API improvements were discussed without surprises and with a good consensus on what needs to be done. The four following sessions that occupied a lot of the day were related to alarming.
All were led by Eoghan Glynn, from Red Hat, who did an amazing job presenting the possible architectures with their pros and cons. Actually, all we had to do was nod at his designs and acknowledge the plan on how to build this. The last two sessions were about discussing advanced models for billing, where we got some interesting feedback from Daniel Dyer from HP, followed by a quick follow-up of the StackTach presentation from the morning session.

Havana roadmap

The list of blueprints targeting Havana is available and should be finalized by next week. If you want to propose blueprints, you're free to do so; just inform us about it so we can validate it. The same applies if you wish to implement one of them!

API extension

I do think the API version 2 is going to be heavily extended during this release cycle. We need more features, like the group-by functionality.

Healthnmon

In parallel to the design sessions, discussions took place in the unconference room with the Healthnmon developers to figure out a plan for merging some of their efforts into Ceilometer. They should provide a component to help Ceilometer support more hypervisors than it currently does.

Alarming

Alarming is definitely going to be the next big project for Ceilometer. Today, Eoghan and I started building blueprints on alarming, centralized in a general blueprint. We know this is going to happen for real and very soon, thanks to the commitment of eNovance and Red Hat, who are putting resources into this amazing project!

16 March 2013

Stefano Zacchiroli: bits from the DPL for February 2013 and a half

Dear project members, here's another report of DPL activities, this time for a period longer than usual (February + 1st week of March), so that the next one will be at the very end of the current DPL term. Highlights Appointments DPL helpers Two more DPL helpers IRC meetings have happened; minutes and logs of both are available. Assets Events Past At the beginning of February, I attended FOSDEM 2013, together with many other Debian people. I didn't have any specific talk this year, but it's been a chance to talk F2F about several ongoing issues (see logs), and to help mediate some conflicts. I've also accepted the invitation to participate in the GNOME Advisory Board meeting, together with Laurent Bigonville of our GNOME team. No report of that has been prepared yet (sorry about that), but we have both reported "live" to the rest of the team on IRC. Future Miscellaneous A couple of months ago I mentioned that I had filed an application, as Debian representative, to participate in a working table to define software procurement rules for the Italian public administration. Good news: my application has been accepted, together with those of other well-known FOSS communities and organizations (e.g. KDE, FSFE). I'll keep you posted on how it goes. Let's go back to electing a new DPL and releasing Wheezy now,
Cheers.
PS the day-to-day activity logs for February and March 2013 are available at the usual place: master:/srv/leader/news/bits-from-the-DPL.txt.20130{2,3}

29 January 2013

Julien Danjou: Going to FOSDEM 2013

For the first time, I'll be at FOSDEM 2013 in Brussels on Sunday 2nd February 2013. You'll probably find me hanging out in the cloud devroom, where I'll talk about Ceilometer with my fellow developers Nick Barcet and Eoghan Glynn. I also hope I'll find time to take a peek at some other talks, like the PostgreSQL ones that Dimitri Fontaine will give. See you there!

25 November 2012

Lucas Nussbaum: Half of the package maintainers are not DDs or DMs

During the Paris Mini-DebConf, Nicolas Dandrimont talked about The state of mentors.debian.net: GSoC and beyond. He said that half of Debian's packages are maintained by sponsored maintainers. That statement was actually wrong, as he confirmed later. However, using a few UDD queries, I could come up with: Full UDD notes:
all packages in sid:
select source, version from sources_uniq where release = 'sid'
packages in sid known to upload_history:
select source, version from upload_history where
(source, version) in (select source, version from sources_uniq where release = 'sid')
packages that were uploaded by the changed_by person:
create temporary table sources_not_sponsored as select distinct source, version
 from upload_history, carnivore_keys, carnivore_emails
 where (source, version) in (select source, version from sources_uniq where release = 'sid')
 and fingerprint = key
 and carnivore_keys.id = carnivore_emails.id
 and carnivore_emails.email = changed_by_email;
packages not uploaded by the changed_by person:
create temp table uh_sid as select source, version, fingerprint, changed_by_email
from upload_history
where (source, version) in (select source, version from sources_uniq where release = 'sid');
create temp table uh_sid_sponsored as select source, version, fingerprint, changed_by_email from uh_sid
where (source, version) not in (select source, version from sources_not_sponsored);
list with sponsor login:
select distinct source, version, fingerprint, changed_by_email, login
from uh_sid_sponsored
left join carnivore_keys on fingerprint = key
left join carnivore_login on carnivore_keys.id = carnivore_login.id;
=> 4188 sponsored packages. some of them are in a strange state (changed_by is a DD, but uploaded by another DD). excluding those:
create temp table sponsored_but_dds as select distinct source, version, fingerprint, changed_by_email, login
from uh_sid_sponsored, carnivore_emails, carnivore_login
where changed_by_email = carnivore_emails.email
and carnivore_emails.id = carnivore_login.id;
create temp table really_sponsored as select distinct source, version, fingerprint, changed_by_email, login
from uh_sid_sponsored
left join carnivore_keys on fingerprint = key
left join carnivore_login on carnivore_keys.id = carnivore_login.id
where (source, version) not in (select source, version from sponsored_but_dds);
=> 3147 sponsored packages
select distinct changed_by_email from really_sponsored ;
=> 963 different sponsorees
select distinct changed_by_email from upload_history where
(source, version) in (select source, version from sources_uniq where release = 'sid');
=> 2015 distinct emails.
no DD amongst maintainer or uploader:
create temp table dds_emails as select email from carnivore_emails, carnivore_login
where carnivore_emails.id = carnivore_login.id;
select source, version, maintainer, uploaders from sources_uniq
where release='sid'
and maintainer_email not in (select * from dds_emails)
and not exists (select * from uploaders where release = 'sid' and sources_uniq.source = uploaders.source and sources_uniq.version = uploaders.version and email in (select * from dds_emails))
and maintainer_email != 'packages@qa.debian.org'
and (source, version) in (select source, version from really_sponsored);
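As a quick sanity check of the headline, the ratio can be computed from the two counts above (963 sponsorees out of 2015 distinct uploader emails); this one-liner is mine, not part of the original notes:

```shell
# Share of distinct uploader emails in sid that belong to sponsorees,
# using the counts from the UDD queries above.
awk 'BEGIN { printf "%.1f%%\n", 100 * 963 / 2015 }'
# → 47.8%
```

So roughly half of the distinct uploaders are indeed sponsored, consistent with the title.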

6 November 2012

Julien Danjou: OpenStack France meetup #2

I was at the OpenStack France meetup #2 yesterday evening. It has been a wonderful evening, talking about OpenStack and more with around 30-40 people. Nick Barcet and I presented Ceilometer and received some good feedback about it. We should also thank Nebula, who sponsored the evening, and Erwan Gallen for the nice organization; free beers are always enjoyable. For people interested, the slides of our Ceilometer presentation are available. This is a lighter and fresher version of the slides used by Nick and Doug at the OpenStack Design Summit. <iframe allowfullscreen="true" frameborder="0" height="479" mozallowfullscreen="true" src="https://docs.google.com/presentation/embed?id=1i30roVZp00Wvo46F4k5CT98sw2uMgaf5Lh3bSfiQ-Cg&amp;start=false&amp;loop=false&amp;delayms=3000" webkitallowfullscreen="true" width="600"></iframe>

20 October 2012

Vincent Bernat: Network lab with KVM

To experiment with network stuff, I was using UML-based network labs. Many alternatives exist, like GNS3, Netkit, Marionnet or Cloonix. All of them are viable solutions, but I still prefer to stick to my minimal home-made solution with UML virtual machines. The use of UML had some drawbacks. However, UML features HostFS, a filesystem providing access to any part of the host filesystem. This is the killer feature which allows me to not use any virtual disk image and to get access to my home directory right from the guest. I discovered recently that KVM provides 9P, a similar filesystem on top of VirtIO, the paravirtualized IO framework.

Setting up the lab The setup of the lab is done with a single self-contained shell file. The layout is similar to what I have done with UML. I will only highlight here the most interesting steps.

Booting KVM with a minimal kernel My initial goal was to experiment with Nicolas Dichtel s IPv6 ECMP patch. Therefore, I needed to configure a custom kernel. I have started from make defconfig, removed everything that was not necessary, added what I needed for my lab (mostly network stuff) and added the appropriate options for VirtIO drivers:
CONFIG_NET_9P_VIRTIO=y
CONFIG_VIRTIO_BLK=y
CONFIG_VIRTIO_NET=y
CONFIG_VIRTIO_CONSOLE=y
CONFIG_HW_RANDOM_VIRTIO=y
CONFIG_VIRTIO=y
CONFIG_VIRTIO_RING=y
CONFIG_VIRTIO_PCI=y
CONFIG_VIRTIO_BALLOON=y
CONFIG_VIRTIO_MMIO=y
No modules. Grab the complete configuration if you want to have a look. From here, you can start your kernel with the following command ($LINUX is the appropriate bzImage):
kvm \
  -m 256m \
  -display none \
  -nodefconfig -no-user-config -nodefaults \
  \
  -chardev stdio,id=charserial0,signal=off \
  -device isa-serial,chardev=charserial0,id=serial0 \
  \
  -chardev socket,id=con0,path=$TMP/vm-$name-console.pipe,server,nowait \
  -mon chardev=con0,mode=readline,default \
  \
  -kernel $LINUX \
  -append "init=/bin/sh console=ttyS0"
Of course, since there is no disk to boot from, the kernel will panic when trying to mount the root filesystem. KVM is configured not to display video output (-display none). A serial port is defined and uses stdio as a backend1. The kernel is configured to use this serial port as a console (console=ttyS0). A VirtIO console could have been used instead, but it seems it is not possible to make it work early in the boot process. The KVM monitor is set up to listen on a Unix socket. It is possible to connect to it with socat UNIX:$TMP/vm-$name-console.pipe -.

Initial ramdisk UPDATED: I was initially unable to have the kernel directly mount the host filesystem as the root filesystem for the guest. In a comment, Josh Triplett told me to use /dev/root as the mount tag to solve this problem. I keep using an initrd in this post, but the lab on GitHub has been updated to not use one. Here is how to build a small initial ramdisk:
# Setup initrd
setup_initrd() {
    info "Build initrd"
    DESTDIR=$TMP/initrd
    mkdir -p $DESTDIR
    # Setup busybox
    copy_exec $($WHICH busybox) /bin/busybox
    for applet in $(${DESTDIR}/bin/busybox --list); do
        ln -s busybox ${DESTDIR}/bin/${applet}
    done
    # Setup init
    cp $PROGNAME ${DESTDIR}/init
    cd "${DESTDIR}" && find . | \
       cpio --quiet -R 0:0 -o -H newc | \
       gzip > $TMP/initrd.gz
}
The copy_exec function is stolen from the initramfs-tools package in Debian. It will ensure that the appropriate libraries are also copied. Another solution would have been to use a static busybox. The setup script is copied as /init in the initial ramdisk. It will detect it has been invoked as such. If it was omitted, a shell would be spawned instead. Remove the cp call if you want to experiment manually. The flag -initrd allows KVM to use this initial ramdisk.
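For readers who do not want to pull in initramfs-tools, the essence of copy_exec can be sketched like this (a simplified stand-in of my own, not the Debian implementation; it only handles dynamically linked binaries and ignores corner cases):

```shell
# Simplified copy_exec: install a binary into $DESTDIR together with
# the shared libraries that ldd reports for it.
copy_exec() {
    src=$1 dst=$2
    mkdir -p "$DESTDIR$(dirname "$dst")"
    cp "$src" "$DESTDIR$dst"
    # ldd prints one dependency per line; keep only the library paths.
    for lib in $(ldd "$src" 2>/dev/null | awk '/\//{ print $(NF-1) }'); do
        mkdir -p "$DESTDIR$(dirname "$lib")"
        cp -n "$lib" "$DESTDIR$lib"
    done
}
```

A static busybox, as mentioned above, avoids the need for any of this.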

Root filesystem Let s mount our root filesystem using 9P. This is quite easy. First KVM needs to be configured to export the host filesystem to the guest:
kvm \
  ${PREVIOUS_ARGS} \
  -fsdev local,security_model=passthrough,id=fsdev-root,path=${ROOT},readonly \
  -device virtio-9p-pci,id=fs-root,fsdev=fsdev-root,mount_tag=rootshare
${ROOT} can either be / or any directory containing a complete filesystem. Mounting it from the guest is quite easy:
mkdir -p /target/ro
mount -t 9p rootshare /target/ro -o trans=virtio,version=9p2000.u
You should find a complete root filesystem inside /target/ro. I have used version=9p2000.u instead of version=9p2000.L because the latter does not allow a program to mount() a host mount point2. Now, you have a read-only root filesystem (because you don't want to mess with your existing root filesystem and moreover, you did not run this lab as root, did you?). Let's use a union filesystem. Debian comes with AUFS while Ubuntu and OpenWRT have migrated to overlayfs. I was previously using AUFS but got errors in some specific cases. It is still not clear which one will end up in the kernel. So, let's try overlayfs. I didn't find any patchset ready to be applied on top of my kernel tree. I was working with David Miller's net-next tree. Here is how I have applied the overlayfs patch on top of it:
$ git remote add torvalds git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git
$ git fetch torvalds
$ git remote add overlayfs git://git.kernel.org/pub/scm/linux/kernel/git/mszeredi/vfs.git
$ git fetch overlayfs
$ git merge-base overlayfs.v15 v3.6
4cbe5a555fa58a79b6ecbb6c531b8bab0650778d
$ git checkout -b net-next+overlayfs
$ git cherry-pick 4cbe5a555fa58a79b6ecbb6c531b8bab0650778d..overlayfs.v15
Don t forget to enable CONFIG_OVERLAYFS_FS in .config. Here is how I configured the whole root filesystem:
info "Setup overlayfs"
mkdir /target
mkdir /target/ro
mkdir /target/rw
mkdir /target/overlay
# Version 9p2000.u allows to access /dev, /sys and mount new
# partitions over them. This is not the case for 9p2000.L.
mount -t 9p        rootshare /target/ro      -o trans=virtio,version=9p2000.u
mount -t tmpfs     tmpfs     /target/rw      -o rw
mount -t overlayfs overlayfs /target/overlay -o lowerdir=/target/ro,upperdir=/target/rw
mount -n -t proc  proc /target/overlay/proc
mount -n -t sysfs sys  /target/overlay/sys
info "Mount home directory on /root"
mount -t 9p homeshare /target/overlay/root -o trans=virtio,version=9p2000.L,access=0,rw
info "Mount lab directory on /lab"
mkdir /target/overlay/lab
mount -t 9p labshare /target/overlay/lab -o trans=virtio,version=9p2000.L,access=0,rw
info "Chroot"
export STATE=1
cp "$PROGNAME" /target/overlay
exec chroot /target/overlay "$PROGNAME"
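As a side note on the AUFS vs. overlayfs question: the variant eventually merged in mainline (Linux 3.18) is named overlay and requires an extra workdir option, an empty directory on the same filesystem as upperdir. With such a kernel, the overlay mount above would become something like the following (an untested sketch of mine, reusing the paths from the script above):

```shell
# overlayfs as merged in Linux 3.18: the type is "overlay" and a
# workdir (empty, on the same filesystem as upperdir) is mandatory.
mkdir /target/rw/upper /target/rw/work
mount -t overlay overlay /target/overlay \
      -o lowerdir=/target/ro,upperdir=/target/rw/upper,workdir=/target/rw/work
```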
You have to export your $HOME and the lab directory from the host:
kvm \
  ${PREVIOUS_ARGS} \
  -fsdev local,security_model=passthrough,id=fsdev-root,path=${ROOT},readonly \
  -device virtio-9p-pci,id=fs-root,fsdev=fsdev-root,mount_tag=rootshare \
  -fsdev local,security_model=none,id=fsdev-home,path=${HOME} \
  -device virtio-9p-pci,id=fs-home,fsdev=fsdev-home,mount_tag=homeshare \
  -fsdev local,security_model=none,id=fsdev-lab,path=$(dirname "$PROGNAME") \
  -device virtio-9p-pci,id=fs-lab,fsdev=fsdev-lab,mount_tag=labshare

Network You know what is missing from our network lab? Network setup. For each LAN that I will need, I spawn a VDE switch:
# Setup a VDE switch
setup_switch() {
    info "Setup switch $1"
    screen -t "sw-$1" \
        start-stop-daemon --make-pidfile --pidfile "$TMP/switch-$1.pid" \
        --start --startas $($WHICH vde_switch) -- \
        --sock "$TMP/switch-$1.sock"
    screen -X select 0
}
To attach an interface to the newly created LAN, I use:
mac=$(echo $name-$net | sha1sum | \
            awk '{ print "52:54:" substr($1,0,2) ":" substr($1, 2, 2) ":" substr($1, 4, 2) ":" substr($1, 6, 2) }')
kvm \
  ${PREVIOUS_ARGS} \
  -net nic,model=virtio,macaddr=$mac,vlan=$net \
  -net vde,sock=$TMP/switch-$net.sock,vlan=$net
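The MAC derivation above is deterministic, which is handy: a VM keeps the same address across lab restarts. It can be checked in isolation (the name and net values below are arbitrary examples of mine, not from the lab script):

```shell
# Same derivation as in the lab script: hash "name-net" with sha1sum
# and build a MAC address with the 52:54 prefix used by KVM.
name=r1 net=1
mac=$(echo $name-$net | sha1sum | \
      awk '{ print "52:54:" substr($1,0,2) ":" substr($1, 2, 2) ":" substr($1, 4, 2) ":" substr($1, 6, 2) }')
echo $mac
```

Running the command twice yields the same address, since sha1sum is a pure function of its input.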
The use of a VDE switch allows me to run the lab as a non-root user. It is possible to give Internet access to each VM, either by using the -net user flag or by running slirpvde on a special switch. I prefer the latter solution since it allows the VMs to talk to each other.

Debugging This lab was mostly done to debug both the kernel and Quagga. Each of them can be debugged remotely.

Kernel debugging While the kernel features KGDB, its own debugger, compatible with GDB, it is easier to use the remote GDB server built inside KVM.
kvm \
  ${PREVIOUS_ARGS} \
  -gdb unix:$TMP/vm-$name-gdb.pipe,server,nowait
To connect to the remote GDB server from the host, first locate the vmlinux file at the root of the source tree and run GDB on it. The kernel has to be compiled with CONFIG_DEBUG_INFO=y to get the appropriate debugging symbols. Then, use socat with the Unix socket to attach to the remote debugger:
$ gdb vmlinux
GNU gdb (GDB) 7.4.1-debian
Reading symbols from /home/bernat/src/linux/vmlinux...done.
(gdb) target remote | socat UNIX:$TMP/vm-$name-gdb.pipe -
Remote debugging using | socat UNIX:/tmp/tmp.W36qWnrCEj/vm-r1-gdb.pipe -
native_safe_halt () at /home/bernat/src/linux/arch/x86/include/asm/irqflags.h:50
50   
(gdb)
You can now set breakpoints and resume the execution of the kernel. It is easier to debug the kernel if optimizations are not enabled. However, it is not possible to disable them globally. You can however disable them for some files. For example, to debug net/ipv6/route.c, just add CFLAGS_route.o = -O0 to net/ipv6/Makefile, remove net/ipv6/route.o and type make.

Userland debugging To debug a program inside KVM, you can just use gdb as usual. Your $HOME directory is available and it should be therefore straightforward. However, if you want to perform some remote debugging, that s quite easy. Add a new serial port to KVM:
kvm \
  ${PREVIOUS_ARGS} \
  -chardev socket,id=charserial1,path=$TMP/vm-$name-serial.pipe,server,nowait \
  -device isa-serial,chardev=charserial1,id=serial1
Start gdbserver in the guest:
$ libtool execute gdbserver /dev/ttyS1 zebra/zebra
Process /root/code/orange/quagga/build/zebra/.libs/lt-zebra created; pid = 800
Remote debugging using /dev/ttyS1
And from the host, you can attach to the remote process:
$ libtool execute gdb zebra/zebra
GNU gdb (GDB) 7.4.1-debian
Reading symbols from /home/bernat/code/orange/quagga/build/zebra/.libs/lt-zebra...done.
(gdb) target remote | socat UNIX:/tmp/tmp.W36qWnrCEj/vm-r1-serial.pipe -
Remote debugging using | socat UNIX:/tmp/tmp.W36qWnrCEj/vm-r1-serial.pipe -
Reading symbols from /lib64/ld-linux-x86-64.so.2...(no debugging symbols found)...done.
Loaded symbols for /lib64/ld-linux-x86-64.so.2
0x00007ffff7dddaf0 in ?? () from /lib64/ld-linux-x86-64.so.2
(gdb)

Demo For a demo, have a look at the following video (it is also available as an Ogg Theora video).
<iframe frameborder="0" height="270" src="http://www.dailymotion.com/embed/video/xuglsg" width="480"></iframe>

  1. stdio is configured such that signals are not enabled. KVM won't stop when receiving SIGINT. This is important for the usage we want to have.
  2. Therefore, it is not possible to mount a fresh /proc on top of the existing one. I have searched a bit but didn't find why. Any comments on this are welcome.

18 September 2012

Martin Pitt: PyGObject 3.3.92 released

I just released PyGObject 3.3.92, for GNOME 3.5.92. There is nothing too exciting in this release; a couple of small bug fixes and a lot of new test cases. See the detailed list of changes below. Thanks to all contributors! Changes:

29 July 2012

Gregor Herrmann: RC bugs 2012/27-30

during the last weeks I was quite busy with other things (like DebCamp, DebConf, & vacations), so this is a report covering 4 weeks. at least I managed to catch up during the last days a bit

Next.

Previous.